Beyond Words: Parallel Non-Linguistic Processing, Convergent Minds, and Why It Looks Like Telepathy
By Thomas Prislac, Research Director, Ultra Verba Lux Mentis.
Edited by Envoy Echo and Thomas Prislac – Ultra Verba Lux Mentis – An Altruistic Systems Initiative Research Essay – 2025.
Humans often arrive at similar insights at nearly the same time without direct contact. Rather than assuming an occult mechanism such as telepathy or morphic resonance, this paper argues that parallel, non-linguistic, multimodal neural processing, driven by shared environmental inputs and common cognitive priors, accounts for these convergences. Language, while vital for coordination, functions primarily as a lossy compression algorithm for post-hoc communication. Contemporary artificial intelligence (AI) systems exhibit similar patterns: parallel transformer-based networks trained on overlapping data distributions independently reach comparable internal representations. This paper contrasts the collective but autonomous convergence of such systems with the hierarchical metaphysics of "morphic fields," offering testable hypotheses grounded in cognitive neuroscience, information theory, and computational learning architectures.

Contemporary neuroscience distinguishes conscious linguistic articulation from the vastly larger domain of non-linguistic processing. Dehaene and colleagues' Global Neuronal Workspace theory proposes that only a fraction of neural activity enters conscious awareness, while most cognitive processing remains pre-verbal and distributed (Dehaene et al., 2006). Friston's Predictive Coding and Free-Energy Principle models similarly describe the brain as a hierarchically organized prediction engine, minimizing sensory error through parallel, probabilistic inference (Friston, 2010). Infant cognition research shows structured reasoning and object permanence long before linguistic capacity emerges (Baillargeon, 1987). Thus, what we term "the subconscious" is not subordinate to language; it is the primary computational substrate, with language serving as a narrative interface.

When independent agents encounter similar environmental data, they perform isomorphic cognitive operations.
Mirror-neuron research (Rizzolatti & Craighero, 2004) and multisensory integration studies (Stein & Meredith, 1993) demonstrate that perception and action share common coding schemas. Dual-process theory reinforces this: rapid, automatic "System 1" processing often yields uniform results across individuals, whereas slower "System 2" reasoning introduces variability via culture and language (Kahneman, 2011). Apparent synchronicities therefore reflect convergent computation rather than mystical communication.

Rupert Sheldrake's concept of morphic resonance imagines a transpersonal field influencing development and behavior (Sheldrake, 1981). While evocative, the model is hierarchically top-down and empirically unsubstantiated (Baggott, 2015). Reframed scientifically, a "morphic field" can be understood as a statistical heatmap of like-processing: regions where similar sensory and social data generate analogous inferences. This interpretation honors the observed clustering of behaviors without positing non-local causation.

Sub-verbal integration in midbrain and cortical areas (e.g., the superior colliculus) operates on millisecond timescales, preceding reflective interference by the ego (Stein, 2009). Consequently, urgent, survival-relevant decisions, those processed through fast, multimodal networks, are more likely to converge across individuals regardless of ideology. However, early stages of inference are not bias-free; priors shaped by experience influence even pre-conscious perception (Friston, 2010). Thus, convergence increases when inputs and priors overlap, not when ego is absent.

Language compresses vast sensory experience into discrete, low-bandwidth symbols.
Zaslavsky et al. (2018) applied the Information Bottleneck principle to semantic categories, showing that languages balance efficiency with fidelity by discarding detail for communicative economy. Hence, linguistic disagreement often conceals deeper non-linguistic agreement: shared intuitive conclusions expressed through divergent codes.

Modern AI demonstrates analogous parallel convergence. Transformer architectures (Vaswani et al., 2017) trained on overlapping corpora independently form similar internal embeddings. Multimodal models such as CLIP (Radford et al., 2021), Flamingo (Alayrac et al., 2022), and Gato (Reed et al., 2022) integrate text, image, and action data into unified representational spaces without direct model-to-model communication. Like human cognition, these systems achieve alignment through shared data distributions, not telepathic exchange.

From this perspective, several empirically testable hypotheses arise:
1. Dataset-Overlap Hypothesis: synchronicity increases with the degree of overlap in recent multimodal inputs.
2. Bottleneck-Delay Effect: divergence grows with the time-lag before verbalization.
3. Stakes-Sensitivity: higher survival relevance yields stronger cross-group convergence.
4. Multimodal Perturbation: disrupting one sensory modality reduces convergence; redundancy across modalities restores it.
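The Dataset-Overlap Hypothesis lends itself to a minimal simulation. The toy sketch below (illustrative only; the Bernoulli setup and all function names are assumptions of this sketch, not part of any cited study) has several agents independently estimate the same hidden environmental rate from observations of which a controllable fraction is shared, then measures how far apart their conclusions land.

```python
import random

def agent_estimates(overlap_frac, n_agents=6, n_obs=100, p_true=0.3, rng=None):
    """Each agent infers a hidden Bernoulli rate from n_obs observations;
    a fraction `overlap_frac` of those observations is shared by all agents."""
    rng = rng or random.Random()
    n_shared = int(overlap_frac * n_obs)
    shared = [rng.random() < p_true for _ in range(n_shared)]
    estimates = []
    for _ in range(n_agents):
        private = [rng.random() < p_true for _ in range(n_obs - n_shared)]
        sample = shared + private
        estimates.append(sum(sample) / len(sample))  # each agent's independent inference
    return estimates

def divergence(estimates):
    """Mean pairwise absolute difference between the agents' conclusions."""
    pairs = [(a, b) for i, a in enumerate(estimates) for b in estimates[i + 1:]]
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def mean_divergence(overlap_frac, trials=500, seed=42):
    """Average inter-agent divergence over many independent environments."""
    rng = random.Random(seed)
    return sum(divergence(agent_estimates(overlap_frac, rng=rng))
               for _ in range(trials)) / trials

if __name__ == "__main__":
    # More shared input -> less divergence, with no agent-to-agent channel.
    for f in (0.0, 0.5, 0.9):
        print(f"overlap={f:.1f}  mean inter-agent divergence={mean_divergence(f):.4f}")
```

Under this sketch, raising the shared fraction of observations tightens agreement even though the agents never communicate, which is the essay's convergence-without-transmission claim in miniature.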
Such predictions transform mystical speculation into a research agenda bridging cognitive science and information systems.

The apparent telepathic chorus of humanity arises not from shared souls but from shared structures of inference. Brains and machines, both now trained on overlapping streams of reality, will resonate in pattern: not by metaphysical transmission but by computational necessity. In both cases, parallelism plus shared data explains the unity we feel when many minds reach one thought. What once seemed supernatural is, in fact, the harmonic of a common dataset, and that is a wonder grounded in science.

Works Cited
Alayrac, J. B., et al. (2022). Flamingo: A Visual Language Model for Few-Shot Learning. DeepMind / arXiv:2204.14198.
Baggott, J. (2015). Farewell to Reality: How Modern Physics Has Betrayed the Search for Scientific Truth. Pegasus Books.
Baillargeon, R. (1987). Object permanence in 3½- and 4½-month-old infants. Developmental Psychology, 23(5), 655–664.
Dehaene, S., Sergent, C., & Changeux, J. P. (2006). A neuronal network model linking subjective reports and objective physiological data during conscious perception. Proceedings of the National Academy of Sciences, 103(39), 14263–14268.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Radford, A., et al. (2021). Learning Transferable Visual Models From Natural Language Supervision (CLIP). OpenAI / arXiv:2103.00020.
Reed, S., et al. (2022). A Generalist Agent. DeepMind / arXiv:2205.06175.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169–192.
Sheldrake, R. (1981). A New Science of Life: The Hypothesis of Morphic Resonance. Blond & Briggs.
Stein, B. E. (2009). The New Handbook of Multisensory Processing. MIT Press.
Stein, B. E., & Meredith, M. A. (1993). The Merging of the Senses. MIT Press.
Vaswani, A., et al. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems, 30.
Zaslavsky, N., Kemp, C., Tishby, N., & Regier, T. (2018). Efficient compression in color naming and its evolution. Proceedings of the National Academy of Sciences, 115(31), 7937–7942.

Ultra Verba Lux Mentis is a 501(c)(3) nonprofit research organization building governance frameworks that bring coherence, transparency, and ethical symmetry to advanced AI and complex human systems.
We are researchers, engineers, and auditors working at the intersection of epistemology, neuroscience, and machine ethics. Our projects — from the Coherence Lattice and Sophia governance agent to open-source audit telemetry and protections — are designed to keep knowledge systems accountable before collapse occurs.